125 research outputs found

    Learning Maximal Margin Markov Networks via Tractable Convex Optimization

    No full text
It is shown that learning a general Markov network can be cast as a convex optimization problem. The key idea of the method is to use a linear programming (LP) relaxation of the (max,+)-problem directly in the formulation of the learning problem.
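The formulation sketched below is a standard max-margin Markov network objective with an LP-relaxed loss-augmented inference step, reconstructed from the abstract; the notation is generic, not the paper's.

```latex
% Max-margin learning of a Markov network with parameters w,
% joint feature map \phi and label loss \Delta:
\min_{w,\,\xi} \;\; \tfrac{1}{2}\lVert w\rVert^{2} + C \sum_i \xi_i
\quad \text{s.t.} \quad
\langle w, \phi(x_i, y_i)\rangle \;\ge\;
\max_{y}\,\bigl[\langle w, \phi(x_i, y)\rangle + \Delta(y_i, y)\bigr] - \xi_i .

% The inner maximization is a (max,+) labeling problem. Replacing it by
% its LP relaxation over the local marginal polytope \mathcal{L},
\max_{\mu \in \mathcal{L}} \;\bigl[\langle w, F\mu\rangle + \langle \Delta_i, \mu\rangle\bigr],
% keeps the constraint a pointwise maximum of affine functions of w,
% so the whole learning problem remains a tractable convex program.
```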

    Flow-based detection and proxy-based evasion of encrypted malware C2 traffic

    Full text link
State-of-the-art deep learning techniques are known to be vulnerable to evasion attacks, where an adversarial sample is generated from a malign sample and misclassified as benign. Detection of encrypted malware command and control (C2) traffic based on TCP/IP flow features can be framed as a learning task and is thus vulnerable to evasion attacks. However, unlike, e.g., image processing, where generated adversarial samples map directly to images, going from flow features to actual TCP/IP packets requires crafting the sequence of packets, with no established approach for such crafting and a limitation on the set of features that crafting can modify. In this paper we discuss the learning and evasion consequences of the gap between generated and crafted adversarial samples. We exemplify with a deep neural network detector trained on a public C2 traffic dataset, white-box adversarial learning, and a proxy-based approach for crafting longer flows. Our results show that 1) the high evasion rate obtained by using generated adversarial samples on the detector is significantly reduced when using crafted adversarial samples; 2) robustness against adversarial samples obtained by model hardening varies with the crafting approach and the corresponding set of modifiable features the attack allows; 3) incrementally training hardened models with adversarial samples can produce a level playing field where, within a given set of attacks and detectors, no detector is best against all attacks and no attack is best against all detectors. To the best of our knowledge, this is the first analysis of level-playing-field, feature-set and iteration hardening in encrypted C2 malware traffic detection. Comment: 9 pages, 6 figures
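The gap between generated and crafted adversarial samples can be pictured with a toy linear detector over flow features: a generated sample may perturb any feature freely, while a crafted one can only change features reachable through packet manipulation, typically increase-only (padding inflates byte counts, delays inflate durations). Everything below — the detector, weights, and feature semantics — is an invented illustration, not the paper's model.

```python
# Toy illustration (not the paper's model): a linear detector over flow
# features, and an evasion step restricted to features an attacker can
# actually craft by manipulating packets (here: increase-only).

def score(w, b, x):
    """Detector score; >= 0 means 'malicious'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def crafted_evasion(w, b, x, modifiable, step=0.5, max_iter=100):
    """Greedily inflate modifiable features whose weight is negative,
    since only increases are feasible via padding/timing inflation."""
    x = list(x)
    for _ in range(max_iter):
        if score(w, b, x) < 0:
            break                  # evaded
        moved = False
        for i in modifiable:
            if w[i] < 0:           # increasing this feature lowers the score
                x[i] += step
                moved = True
        if not moved:
            break                  # no feasible crafting move left
    return x
```

With w = [1.0, -0.8, 0.3] and a malicious flow x = [2.0, 1.0, 1.0] (score 1.5), inflating feature 1 evades after a few steps; if only feature 2 (positive weight) were craftable, evasion would fail — mirroring the paper's point that crafting constrains the feasible feature set and deflates evasion rates measured on freely generated samples.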

    Security Evaluation of Support Vector Machines in Adversarial Environments

    Full text link
Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated in real-world security systems, they must be able to cope with attacks that mislead the learning algorithm (poisoning), evade detection (evasion), or gain information about the classifier's internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal general framework for the empirical evaluation of the security of machine-learning systems. Second, within this framework, we demonstrate the feasibility of evasion, poisoning and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, in a public repository. Comment: 47 pages, 9 figures; chapter accepted into the book 'Support Vector Machine Applications'
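For a linear SVM the evasion attack class discussed above has a particularly clean form: the minimal L2 perturbation crossing the decision boundary is a closed-form step along the negative weight vector. The sketch below illustrates that attack class under invented weights; it is not the chapter's implementation.

```python
# Minimal-perturbation evasion against a *linear* SVM decision function
# f(x) = <w, x> + b. For linear models the optimal L2 evasion has a
# closed form: move along -w until f crosses zero (plus a small margin).
# Illustrative sketch of the attack class the chapter evaluates.

def decision(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def minimal_evasion(w, b, x, margin=0.01):
    f = decision(w, b, x)
    if f < 0:
        return list(x)             # already classified as benign
    norm_sq = sum(wi * wi for wi in w)
    t = (f + margin) / norm_sq     # step chosen so f(x - t*w) = -margin
    return [xi - t * wi for xi, wi in zip(x, w)]
```

The L2 cost of this evasion is (f(x) + margin) / ||w||, so a better-regularized, larger-margin SVM forces a larger (and more detectable) perturbation — one of the adversary-aware design levers the chapter discusses.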

    Detecting Distributed Denial of Service Attacks in Neighbour Discovery Protocol Using Machine Learning Algorithm Based on Streams Representation

    Get PDF
© 2018, Springer International Publishing AG, part of Springer Nature. The main protocol of the Internet Protocol version 6 suite is the Neighbour Discovery Protocol, which replaces the address resolution protocol, router discovery, and the redirect function of Internet Protocol version 4. Internet Protocol version 6 nodes employ the Neighbour Discovery Protocol to detect linked hosts and routers on an Internet Protocol version 6 network without depending on a dynamic host configuration protocol server, which has earned the Neighbour Discovery Protocol the title of a stateless protocol. The authentication process of the Neighbour Discovery Protocol exhibits weaknesses that make it vulnerable to attacks: a malicious host can trigger denial of service attacks by introducing spoofed addresses in Neighbour Discovery Protocol messages. Internet Protocol version 6 is also not as well supported by network intrusion detection systems as Internet Protocol version 4. Several data mining techniques have been introduced to improve the classification mechanism of intrusion detection systems, yet extensive research indicates that no intrusion detection system for Internet Protocol version 6 applies advanced machine-learning techniques against distributed denial of service attacks. Given the severity of these attacks and the importance of the Neighbour Discovery Protocol in Internet Protocol version 6, this paper aims to detect distributed denial of service attacks on the Neighbour Discovery Protocol using machine-learning techniques. The decision tree and random forest algorithms showed high accuracy in comparison to the other benchmarked algorithms
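The classifiers that performed best here (decision tree, random forest) work by splitting flow features on learned thresholds. A one-split "stump" over a hypothetical Neighbour Discovery flow feature (neighbour solicitation messages per second) shows the core mechanism; the feature and numbers are invented for illustration, not taken from the paper's dataset.

```python
# A one-split decision "stump" -- the basic building block of the decision
# tree / random forest classifiers benchmarked in the paper -- trained by
# exhaustive threshold search. Toy sketch; the flow feature used in the
# usage example (NS messages/sec) is invented.

def train_stump(X, y):
    """Return ((feature, threshold, polarity), training_errors)."""
    best = None
    for f in range(len(X[0])):
        values = sorted({row[f] for row in X})
        for lo, hi in zip(values, values[1:]):
            thr = (lo + hi) / 2.0
            for pol in (1, -1):
                preds = [1 if pol * (row[f] - thr) > 0 else 0 for row in X]
                err = sum(p != t for p, t in zip(preds, y))
                if best is None or err < best[0]:
                    best = (err, (f, thr, pol))
    return best[1], best[0]

def predict(stump, row):
    f, thr, pol = stump
    return 1 if pol * (row[f] - thr) > 0 else 0
```

With flows described by a single rate feature — benign hosts soliciting a few times per second and flooding attackers hundreds of times — the stump separates the classes with zero training error; a random forest simply votes over many such trees grown on resampled data and feature subsets.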

    Effectiveness evaluation of data mining based IDS

    Get PDF
Proceeding of: 6th Industrial Conference on Data Mining, ICDM 2006, Leipzig, Germany, July 14-15, 2006. Data mining has been widely applied to the problem of intrusion detection in computer networks. However, misconceptions about the underlying problem have led to out-of-context results. This paper shows that factors such as the probability of intrusion and the costs of responding to detected intrusions must be taken into account in order to compare the effectiveness of machine learning algorithms in the intrusion detection domain. Furthermore, we show the advantages of combining different detection techniques. Results on the well-known 1999 KDD dataset are shown.
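The paper's central argument — that intrusion probability and response costs must enter the comparison — can be sketched with a simple expected-cost figure of merit. The cost model and all numbers below are an illustration of that idea, not the paper's exact metric.

```python
# Expected operating cost of an IDS, folding in the base rate of intrusion
# and the costs of (mis)responding. Illustrative model, not the paper's:
#   cost = p * (fnr * c_miss + (1 - fnr) * c_resp)
#        + (1 - p) * fpr * c_resp
# p: probability an event is an intrusion; fpr/fnr: false positive/negative
# rates; c_miss: cost of a missed intrusion; c_resp: cost of responding.

def expected_cost(p, fpr, fnr, c_miss, c_resp):
    return p * (fnr * c_miss + (1 - fnr) * c_resp) + (1 - p) * fpr * c_resp
```

With c_miss = 1000 and c_resp = 1, a low-false-alarm detector A (fpr 0.01, fnr 0.10) beats a high-recall detector B (fpr 0.10, fnr 0.01) when intrusions are rare (p = 1e-4), but the ranking flips at p = 0.1 — exactly the context an accuracy-only comparison ignores.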

    IDS Based on Bio-inspired Models

    Get PDF
    Unsupervised projection approaches can support Intrusion Detection Systems for computer network security. The involved technologies assist a network manager in detecting anomalies and potential threats by an intuitive display of the progression of network traffic. Projection methods operate as smart compression tools and map raw, high-dimensional traffic data into 2-D or 3-D spaces for subsequent graphical display. The paper compares three projection methods, namely, Cooperative Maximum Likelihood Hebbian Learning, Auto-Associative Back-Propagation networks and Principal Component Analysis. Empirical tests on anomalous situations related to the Simple Network Management Protocol (SNMP) confirm the validity of the projection-based approach. One of these anomalous situations (the SNMP community search) is faced by these projection models for the first time. This work also highlights the importance of the time-information dependence in the identification of anomalous situations in the case of the applied methods
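Of the three projection methods compared, PCA is the simplest to sketch: centre the data, take the top two eigenvectors of the covariance matrix, and project onto them. The power-iteration implementation below is a minimal, dependency-free illustration of that mapping, not the paper's code.

```python
import math

# Minimal PCA-style 2-D projection via power iteration with deflation.
# Sketch of the projection idea the paper compares (alongside CMLHL and
# auto-associative back-propagation networks), not the authors' code.

def _top_eigvec(cov, iters=500):
    """Power iteration for the dominant eigenvector of a symmetric matrix."""
    n = len(cov)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]
    return v

def pca_project_2d(X):
    n, d = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - mean[j] for j in range(d)] for row in X]
    cov = [[sum(r[i] * r[j] for r in Xc) / n for j in range(d)]
           for i in range(d)]
    axes = []
    for _ in range(2):
        v = _top_eigvec(cov)
        lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(d))
                  for i in range(d))
        axes.append(v)
        # deflate: remove the found component before the next iteration
        cov = [[cov[i][j] - lam * v[i] * v[j] for j in range(d)]
               for i in range(d)]
    return [[sum(r[j] * a[j] for j in range(d)) for a in axes] for r in Xc]
```

Each high-dimensional traffic record becomes a 2-D point ordered by explained variance, which is what makes the subsequent graphical display of traffic progression possible.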

    Inhibitor of Kappa B Epsilon (IκBε) Is a Non-Redundant Regulator of c-Rel-Dependent Gene Expression in Murine T and B Cells

    Get PDF
    Inhibitors of kappa B (IκBs) -α, -β and -ε effect selective regulation of specific nuclear factor of kappa B (NF-κB) dimers according to cell lineage, differentiation state or stimulus, in a manner that is not yet precisely defined. Lymphocyte antigen receptor ligation leads to degradation of all three IκBs but activation only of subsets of NF-κB-dependent genes, including those regulated by c-Rel, such as anti-apoptotic CD40 and BAFF-R on B cells, and interleukin-2 (IL-2) in T cells. We report that pre-culture of a mouse T cell line with tumour necrosis factor-α (TNF) inhibits IL-2 gene expression at the level of transcription through suppressive effects on NF-κB, AP-1 and NFAT transcription factor expression and function. Selective upregulation of IκBε and suppressed nuclear translocation of c-Rel were very marked in TNF-treated, compared to control cells, whether activated via T cell receptor (TCR) pathway or TNF receptor. IκBε associated with newly synthesised c-Rel in activated cells and, in contrast to IκBα and -β, showed enhanced association with p65/c-Rel in TNF-treated cells relative to controls. Studies in IκBε-deficient mice revealed that basal nuclear expression and nuclear translocation of c-Rel at early time-points of receptor ligation were higher in IκBε−/− T and B cells, compared to wild-type. IκBε−/− mice exhibited increased lymph node cellularity and enhanced basal thymidine incorporation by lymphoid cells ex vivo. IκBε−/− T cell blasts were primed for IL-2 expression, relative to wild-type. IκBε−/− splenic B cells showed enhanced survival ex vivo, compared to wild-type, and survival correlated with basal expression of CD40 and induced expression of CD40 and BAFF-R. Enhanced basal nuclear translocation of c-Rel, and upregulation of BAFF-R and CD40 occurred despite increased IκBα expression in IκBε−/− B cells. The data imply that regulation of these c-Rel-dependent lymphoid responses is a non-redundant function of IκBε

    A framework for quantitative security analysis of machine learning

    No full text
We propose a framework for quantitative security analysis of machine learning methods. The key parts of this framework are the formal specification of a deployed learning model and the attacker's constraints, the computation of an optimal attack, and the derivation of an upper bound on adversarial impact. As an example, we apply the framework to the analysis of one specific learning scenario, online centroid anomaly detection, and experimentally verify the tightness of the obtained theoretical bounds
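The analyzed scenario, online centroid anomaly detection, admits a very small sketch: flag points far from the running centroid of accepted traffic, and fold accepted points into the centroid. The radius and data below are illustrative; the paper's formal model is concerned with bounding how far a poisoning adversary can drag this centroid.

```python
import math

# Minimal online centroid anomaly detector (sketch of the scenario the
# framework is applied to, not the paper's exact model). Points within
# `radius` of the current centroid are accepted and pull the centroid
# toward them; points outside are flagged as anomalous and rejected.

class CentroidDetector:
    def __init__(self, radius):
        self.radius = radius
        self.centroid = None
        self.count = 0

    def observe(self, x):
        """Return True if x is anomalous; update the centroid on accept."""
        if self.centroid is None:
            self.centroid, self.count = list(x), 1
            return False
        dist = math.sqrt(sum((a - c) ** 2
                             for a, c in zip(x, self.centroid)))
        if dist > self.radius:
            return True                        # reject: no centroid update
        self.count += 1
        self.centroid = [c + (a - c) / self.count
                         for c, a in zip(self.centroid, x)]
        return False
```

The accept/reject boundary is precisely the attack surface: an adversary feeding points just inside `radius` drags the centroid toward a target region, and the framework's optimal-attack computation yields an upper bound on that displacement per poisoned point.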